The proportion of time an animal is in a feeding behavioral state.
Process Model
\[Y_{i,g,t+1} \sim \text{Multivariate Normal}(d_{i,g,t},\Sigma)\]
\[d_{i,g,t}= Y_{i,g,t} + \gamma_{S_{i,g,t}}\,T_{i,g,t}\,( Y_{i,g,t}- Y_{i,g,t-1} )\]
The transition probabilities among behavioral states form the matrix
\[ \begin{matrix} \alpha_{i,1,1} & \beta_{i,1,1} & 1-(\alpha_{i,1,1} + \beta_{i,1,1}) \\ \alpha_{i,2,1} & \beta_{i,2,1} & 1-(\alpha_{i,2,1} + \beta_{i,2,1}) \\ \alpha_{i,3,1} & \beta_{i,3,1} & 1-(\alpha_{i,3,1} + \beta_{i,3,1}) \\ \end{matrix} \]
\[\text{logit}(\phi_{Behavior}) = \alpha_{Behavior_{t-1}}\]
The behavior at time t of individual i on track g is a discrete draw:
\[S_{i,g,t} \sim \text{Cat}(\phi_{traveling},\phi_{foraging},\phi_{resting})\]
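The displacement equation above rotates the previous step by the state-specific turning angle and scales it by the move-persistence parameter. A minimal Python sketch of that deterministic step (illustration only, not the fitting code; `movement_step` is a hypothetical helper):

```python
import numpy as np

def movement_step(y_t, y_prev, gamma, theta):
    """One deterministic step of the correlated random walk:
    d_t = y_t + gamma * T(theta) @ (y_t - y_prev),
    where T(theta) rotates the previous displacement by the turning angle."""
    T = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return y_t + gamma * T @ (y_t - y_prev)

# Straight-ahead travel (theta = 0, gamma = 1) repeats the previous displacement
d = movement_step(np.array([1.0, 1.0]), np.array([0.0, 0.0]), gamma=1.0, theta=0.0)
# -> array([2., 2.])
```

With `theta = 0` the rotation matrix is the identity, so the expected next location simply extends the last step; smaller `gamma` shrinks the expected displacement toward the current position.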
Dive depth is modeled as a mixture distribution conditioned on the behavioral state (S).
Average dive depth (\(\psi\)):
\[ DiveDepth \sim \text{Normal}(dive_{\mu_S},dive_{\tau_S})\]
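The mixture can be sketched as drawing each dive depth from the normal distribution of the current behavioral state, truncated at zero. A minimal Python illustration with made-up parameter values (`depth_params` and `draw_dive_depth` are hypothetical, for exposition only):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical state-specific (mean, sd) dive-depth parameters, illustration only
depth_params = {"traveling": (50.0, 30.0),
                "foraging": (150.0, 80.0),
                "resting": (15.0, 20.0)}

def draw_dive_depth(state, rng):
    """Draw a dive depth from the current state's normal distribution,
    truncated at zero by rejection sampling."""
    mu, sd = depth_params[state]
    while True:
        depth = rng.normal(mu, sd)
        if depth >= 0:
            return depth

depths = [draw_dive_depth("foraging", rng) for _ in range(5)]
```

Because the state indexes the mean and precision, the marginal distribution of observed depths is a mixture whose weights are the state probabilities.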
Dive Profiles with Argos timestamps
Dive autocorrelation plots
## # A tibble: 11 x 4
## Animal n argos dive
## <int> <int> <int> <int>
## 1 131111 548 173 375
## 2 131115 1029 179 850
## 3 131116 1932 457 1475
## 4 131127 9275 2495 6780
## 5 131128 112 52 60
## 6 131130 1165 151 1014
## 7 131132 2292 589 1703
## 8 131133 8497 2078 6419
## 9 131134 2098 653 1445
## 10 131136 6844 1970 4874
## 11 154187 1866 486 1380
## [1] 35658
## # A tibble: 11 x 2
## Animal n
## <int> <int>
## 1 131111 454
## 2 131115 998
## 3 131116 1923
## 4 131127 8255
## 5 131128 106
## 6 131130 483
## 7 131132 1582
## 8 131133 6905
## 9 131134 1921
## 10 131136 6366
## 11 154187 1861
## # A tibble: 11 x 2
## Animal Tracks
## <int> <int>
## 1 131111 1
## 2 131115 2
## 3 131116 1
## 4 131127 26
## 5 131128 1
## 6 131130 2
## 7 131132 1
## 8 131133 35
## 9 131134 4
## 10 131136 13
## 11 154187 2
sink("Bayesian/NestedDive.jags")
cat("
model{
pi <- 3.141592653589
#observation error for each of the 6 argos classes
for(x in 1:6){
##argos observation error##
argos_prec[x,1:2,1:2] <- argos_cov[x,,]
#Constructing the precision matrix (the argos values below are precisions, not SDs)
argos_cov[x,1,1] <- argos_sigma[x]
argos_cov[x,1,2] <- 0
argos_cov[x,2,1] <- 0
argos_cov[x,2,2] <- argos_alpha[x]
}
for(i in 1:ind){
for(g in 1:tracks[i]){
## Priors for first true location
#for lat long
y[i,g,1,1:2] ~ dmnorm(argos[i,g,1,1,1:2],argos_prec[1,1:2,1:2])
#First movement - random walk.
y[i,g,2,1:2] ~ dmnorm(y[i,g,1,1:2],iSigma)
###First Behavioral State###
state[i,g,1] ~ dcat(lambda[]) ## assign state for first obs
#Process Model for movement
for(t in 2:(steps[i,g]-1)){
#Behavioral State at time T
phi[i,g,t,1] <- alpha[state[i,g,t-1]]
phi[i,g,t,2] <- 1-phi[i,g,t,1]
state[i,g,t] ~ dcat(phi[i,g,t,])
#Turning covariate
#Transition Matrix for turning angles
T[i,g,t,1,1] <- cos(theta[state[i,g,t]])
T[i,g,t,1,2] <- (-sin(theta[state[i,g,t]]))
T[i,g,t,2,1] <- sin(theta[state[i,g,t]])
T[i,g,t,2,2] <- cos(theta[state[i,g,t]])
#Correlation in movement change
d[i,g,t,1:2] <- y[i,g,t,] + gamma[state[i,g,t]] * T[i,g,t,,] %*% (y[i,g,t,1:2] - y[i,g,t-1,1:2])
#Gaussian Displacement in location
y[i,g,t+1,1:2] ~ dmnorm(d[i,g,t,1:2],iSigma)
}
#Final behavior state
phi[i,g,steps[i,g],1] <- alpha[state[i,g,steps[i,g]-1]]
phi[i,g,steps[i,g],2] <- 1-phi[i,g,steps[i,g],1]
state[i,g,steps[i,g]] ~ dcat(phi[i,g,steps[i,g],])
## Measurement equation - irregular observations
# loops over regular time intervals (t)
for(t in 2:steps[i,g]){
#first substate
sub_state[i,g,t,1] ~ dcat(sub_lambda[state[i,g,t]])
# loops over observed dive within interval t
for(u in 2:idx[i,g,t]){
#Substate, resting or foraging dives?
sub_phi[i,g,t,u,1] <- sub_alpha[state[i,g,t],sub_state[i,g,t,u-1]]
sub_phi[i,g,t,u,2] <- 1-sub_phi[i,g,t,u,1]
sub_state[i,g,t,u] ~ dcat(sub_phi[i,g,t,u,])
}
# loops over observed locations within interval t
for(u in 1:idx[i,g,t]){
zhat[i,g,t,u,1:2] <- (1-j[i,g,t,u]) * y[i,g,t-1,1:2] + j[i,g,t,u] * y[i,g,t,1:2]
#for each lat and long
#argos error
argos[i,g,t,u,1:2] ~ dmnorm(zhat[i,g,t,u,1:2],argos_prec[argos_class[i,g,t,u],1:2,1:2])
#for each dive depth
#dive depth at time t
divedepth[i,g,t,u] ~ dnorm(depth_mu[state[i,g,t],sub_state[i,g,t,u]],depth_tau[state[i,g,t],sub_state[i,g,t,u]])T(0,)
#Assess Model Fit
#Fit dive discrepancy statistics
eval[i,g,t,u] ~ dnorm(depth_mu[state[i,g,t],sub_state[i,g,t,u]],depth_tau[state[i,g,t],sub_state[i,g,t,u]])T(0,)
E[i,g,t,u]<-pow((divedepth[i,g,t,u]-eval[i,g,t,u]),2)/(eval[i,g,t,u])
dive_new[i,g,t,u] ~ dnorm(depth_mu[state[i,g,t],sub_state[i,g,t,u]],depth_tau[state[i,g,t],sub_state[i,g,t,u]])T(0,)
Enew[i,g,t,u]<-pow((dive_new[i,g,t,u]-eval[i,g,t,u]),2)/(eval[i,g,t,u])
}
}
}
}
###Priors###
#Process Variance
iSigma ~ dwish(R,2)
Sigma <- inverse(iSigma)
##Mean Angle
tmp[1] ~ dbeta(10, 10)
tmp[2] ~ dbeta(10, 10)
# prior for theta in 'traveling state'
theta[1] <- (2 * tmp[1] - 1) * pi
# prior for theta in 'foraging state'
theta[2] <- (tmp[2] * pi * 2)
##Move persistance
# prior for gamma (autocorrelation parameter)
#from jonsen 2016
##Behavioral States
gamma[1] ~ dbeta(5,2) ## gamma for state 1
dev ~ dunif(0.2,1) ## a random deviate to ensure that gamma[1] > gamma[2]
gamma[2] <- gamma[1] * dev
#Transition Intercepts
alpha[1] ~ dbeta(1,1)
alpha[2] ~ dbeta(1,1)
#Probability of init behavior switching
lambda[1] ~ dbeta(1,1)
lambda[2] <- 1 - lambda[1]
#Probability of init subbehavior switching
sub_lambda[1] ~ dbeta(1,1)
sub_lambda[2] <- 1 - sub_lambda[1]
#Dive Priors
#Foraging dives
depth_mu[2,1] ~ dunif(50,250)
depth_sigma[1] ~ dunif(0,80)
depth_tau[2,1] <- 1/pow(depth_sigma[1],2)
#Resting Dives
depth_mu[2,2] ~ dunif(0,30)
depth_sigma[2] ~ dunif(0,20)
depth_tau[2,2] <- 1/pow(depth_sigma[2],2)
#Traveling Dives
depth_mu[1,1] ~ dunif(0,100)
depth_sigma[3] ~ dunif(0,30)
depth_tau[1,1] <- 1/pow(depth_sigma[3],2)
#Dummy traveling substate
depth_mu[1,2]<-0
depth_tau[1,2]<-0.01
#Sub states
#Traveling has no substate
sub_alpha[1,1]<-1
sub_alpha[1,2]<-0
#ARS has two substates, foraging and resting
#Foraging probability
sub_alpha[2,1] ~ dbeta(1,1)
sub_alpha[2,2] ~ dbeta(1,1)
##Argos priors##
#longitudinal argos precision, from Jonsen 2005, 2016, represented as precision not sd
#by argos class
argos_sigma[1] <- 11.9016
argos_sigma[2] <- 10.2775
argos_sigma[3] <- 1.228984
argos_sigma[4] <- 2.162593
argos_sigma[5] <- 3.885832
argos_sigma[6] <- 0.0565539
#latitudinal argos precision, from Jonsen 2005, 2016
argos_alpha[1] <- 67.12537
argos_alpha[2] <- 14.73474
argos_alpha[3] <- 4.718973
argos_alpha[4] <- 0.3872023
argos_alpha[5] <- 3.836444
argos_alpha[6] <- 0.1081156
}"
,fill=TRUE)
sink()
## user system elapsed
## 430.962 99.979 5391.222
## # A tibble: 16 x 6
## # Groups: parameter, Behavior [?]
## parameter Behavior sub_state mean upper lower
## <fctr> <fctr> <fctr> <dbl> <dbl> <dbl>
## 1 alpha 1 NA 0.373 0.501 0.271
## 2 alpha 2 NA 0.242 0.310 0.182
## 3 depth_mu 1 1 25.389 27.014 23.940
## 4 depth_mu 1 2 0.000 0.000 0.000
## 5 depth_mu 2 1 236.809 240.453 233.146
## 6 depth_mu 2 2 26.080 27.031 25.069
## 7 depth_tau 1 1 0.001 0.001 0.001
## 8 depth_tau 1 2 0.010 0.010 0.010
## 9 depth_tau 2 1 0.000 0.000 0.000
## 10 depth_tau 2 2 0.003 0.003 0.003
## 11 gamma 1 NA 0.670 0.839 0.506
## 12 gamma 2 NA 0.519 0.662 0.346
## 13 sub_alpha 1 1 1.000 1.000 1.000
## 14 sub_alpha 1 2 0.000 0.000 0.000
## 15 sub_alpha 2 1 0.896 0.908 0.883
## 16 sub_alpha 2 2 0.105 0.118 0.092
## # A tibble: 5 x 3
## # Groups: state [?]
## state sub_state n
## <dbl> <dbl> <int>
## 1 1 1 50
## 2 1 NA 1
## 3 2 1 90
## 4 2 2 53
## 5 2 NA 8
Grouped by stage
Lines connect individuals
Goodness of fit is measured with a chi-squared discrepancy: the value expected under the model is compared to the observed value from the actual data. In addition, a replicate dataset is generated from the posterior predictive distribution. Better-fitting models have lower discrepancy values and fall closer to the 1:1 line. A perfect model would have zero discrepancy, but that is unrealistic given the stochasticity of the sampling process, so it is better to focus on relative discrepancy. Moreover, a model with zero discrepancy would likely be seriously overfit and have little to no predictive power.
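The discrepancy computed in the model (`E` and `Enew` above) can be sketched outside JAGS. A minimal Python illustration using an assumed fitted normal model and simulated data (all values here are made up, for exposition only):

```python
import numpy as np

rng = np.random.default_rng(1)

def discrepancy(obs, expected):
    """Chi-squared-style discrepancy: sum of (obs - expected)^2 / expected."""
    return np.sum((obs - expected) ** 2 / expected)

# Illustrative posterior-predictive check under an assumed fitted model
mu, sd, n = 100.0, 25.0, 200
observed = rng.normal(mu, sd, n)              # stands in for the real dive data
expected = rng.normal(mu, sd, n).clip(1e-6)   # expected values drawn from the model
replicate = rng.normal(mu, sd, n)             # replicate dataset from the model

E = discrepancy(observed, expected)      # discrepancy of the observed data
Enew = discrepancy(replicate, expected)  # discrepancy of the replicate data
```

When the model fits well, `E` and `Enew` are of similar magnitude, which is why plotting them against the 1:1 line is informative: systematic departure of `E` above `Enew` indicates lack of fit.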
## # A tibble: 1 x 2
## `mean(E)` `var(Enew)`
## <dbl> <dbl>
## 1 226.8111 140543.1
## # A tibble: 1 x 2
## `mean(E)` `var(Enew)`
## <dbl> <dbl>
## 1 228.958 1031115